170 research outputs found

    Highly parallel computation

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computation. The two classes designated multiple-instruction multiple-datastream (MIMD) and single-instruction multiple-datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or dataflow may be needed.

    Unums 2.0: An Interview with John L. Gustafson

    In an earlier interview (April 2016), Ubiquity spoke with John Gustafson about the unum, a new format for floating-point numbers. The unique property of unums is that they always know how many digits of accuracy they have. Now Gustafson has come up with yet another format that, like unum 1.0, always knows how accurate it is. But it also allows an almost arbitrary mapping of bit patterns to the reals. In doing so, it paves the way for custom number systems that squeeze the maximum accuracy out of a given number of bits. This new format could have prime applications in deep learning, big data, and exascale computing.
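The "almost arbitrary mapping of bit patterns to the reals" can be illustrated with a toy lookup-table number system. This is a hypothetical sketch of the general idea only; the table values below are invented for illustration and are not Gustafson's actual unum 2.0 encoding.

```python
# Hypothetical sketch: a 3-bit custom number system defined as a
# lookup table from bit patterns to exact real values, in the spirit
# of a format that lets the designer choose the pattern-to-real map.
# The specific values are illustrative, not the unum 2.0 encoding.

EXACT = {
    0b000: 0.0,
    0b001: 0.5,
    0b010: 1.0,
    0b011: 2.0,
    0b100: float("inf"),   # a single unsigned point at infinity
    0b101: -2.0,
    0b110: -1.0,
    0b111: -0.5,
}

def decode(bits):
    """Map a 3-bit pattern to its exact real value."""
    return EXACT[bits & 0b111]

def encode(x):
    """Round a finite real to the nearest representable exact value,
    returning its bit pattern."""
    finite = {b: v for b, v in EXACT.items() if v != float("inf")}
    return min(finite, key=lambda b: abs(finite[b] - x))
```

Because the designer controls the table, the representable points can be placed wherever an application needs the most accuracy, rather than where a fixed exponent/mantissa split happens to put them.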

    Modula-2*: An extension of Modula-2 for highly parallel programs

    Parallel programs should be machine-independent, i.e., independent of properties that are likely to differ from one parallel computer to the next. Extensions of Modula-2 for writing highly parallel, portable programs meeting these requirements are described. The extensions are: synchronous and asynchronous forms of the forall statement, and control of the allocation of data to processors. Sample programs written with the extensions demonstrate the clarity of parallel programs when machine-dependent details are omitted. The principles of efficiently implementing the extensions on SIMD, MIMD, and MSIMD machines are discussed. The extensions are small enough to be integrated easily into other imperative languages.
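The distinction between the two forall forms can be sketched by emulating them with threads. This is a hypothetical illustration of the semantics only; the names `forall_async` and `forall_sync` are invented here and are not Modula-2* syntax.

```python
# Sketch of asynchronous vs. synchronous forall semantics, emulated
# with Python threads. Names are illustrative, not Modula-2* syntax.
from concurrent.futures import ThreadPoolExecutor
import threading

def forall_async(n, body):
    """Asynchronous forall: the n iterations run independently;
    the construct only waits for all of them to finish at the end."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(body, range(n)))  # list() also propagates errors

def forall_sync(n, steps):
    """Synchronous forall: all n iterations execute each statement
    in lockstep (SIMD-style), separated by barriers."""
    barrier = threading.Barrier(n)
    def body(i):
        for step in steps:
            step(i)
            barrier.wait()  # no iteration starts the next statement
                            # until every iteration has finished this one
    forall_async(n, body)

# Usage: a rotation where step 2 reads values written in step 1.
# The barrier guarantees every b[i] is written before any is read.
a = list(range(4))          # [0, 1, 2, 3]
b = [0] * 4
forall_sync(4, [
    lambda i: b.__setitem__(i, a[i] + 1),
    lambda i: a.__setitem__(i, b[(i + 1) % 4]),
])
# Without the barrier between the two steps, a[i] could read a
# stale b[(i+1) % 4]; with it, the result is deterministic.
```

The asynchronous form maps naturally onto MIMD machines, while the lockstep semantics of the synchronous form matches SIMD execution, which is one way the extensions stay implementable across both architecture classes.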

    The String-to-String Correction Problem with Block Moves


    Multicore and Empirical Research


    STORK: An Experimental Migrating File System for Computer Networks


    04051 Abstracts Collection -- Perspectives Workshop: Empirical Theory and the Science of Software Engineering

    From 25.01.04 to 29.01.04, the Dagstuhl Seminar 04051 ``Perspectives Workshop: Empirical Theory and the Science of Software Engineering'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    What Do Programmers of Parallel Machines Need? A Survey

    We performed semistructured, open-ended interviews with 11 professional developers of parallel, scientific applications to determine how their programming time is spent and where tools could improve productivity. The subjects were selected from a variety of research laboratories, both industrial and governmental. The major findings were that programmers would prefer a global over a per-processor view of data structures, struggle with load balancing and optimizations, and need interactive tools for observing the behavior of parallel programs. Furthermore, handling and processing massive amounts of data in parallel is emerging as a new challenge.